Perspectives on Cognitive Informatics and Cognitive Computing

Authors

  • Yingxu Wang
  • George Baciu
  • Yiyu Yao
  • Witold Kinsner
  • Keith Chan
  • Bo Zhang
  • Stuart R. Hameroff
  • Ning Zhong
  • Chu-Ren Huang
  • Ben Goertzel
  • Duoqian Miao
  • Kenji Sugawara
  • Guoyin Wang
  • Jane You
  • Du Zhang
  • Haibin Zhu
Abstract

Cognitive informatics is a transdisciplinary enquiry of computer science, information science, cognitive science, and intelligence science that investigates the internal information processing mechanisms and processes of the brain and natural intelligence, as well as their engineering applications in cognitive computing. Cognitive computing is an emerging paradigm of intelligent computing methodologies and systems, based on cognitive informatics, that implements computational intelligence by autonomous inferences and perceptions mimicking the mechanisms of the brain. This article presents a set of collective perspectives on cognitive informatics and cognitive computing, as well as their applications in abstract intelligence, computational intelligence, computational linguistics, knowledge representation, symbiotic computing, granular computing, semantic computing, machine learning, and social computing.

DOI: 10.4018/jcini.2010010101
International Journal of Cognitive Informatics and Natural Intelligence, 4(1), 1-29, January-March 2010. Copyright © 2010, IGI Global.

INTRODUCTION

Definition 1: Cognitive Informatics (CI) is a transdisciplinary enquiry of computer science, information science, cognitive science, and intelligence science that investigates the internal information processing mechanisms and processes of the brain and natural intelligence, as well as their engineering applications in cognitive computing (Wang, 2002a, 2003a, 2003b, 2004, 2005, 2007b, 2008b, 2009a; Wang & Kinsner, 2007; Wang & Wang, 2006; Wang, Kinsner, & Zhang, 2009a, 2009b; Wang et al., 2006, 2009). The latest advances and engineering applications of CI have led to the emergence of cognitive computing and the development of cognitive computers that think and learn, as well as autonomous agent systems.
Definition 2: Cognitive Computing (CC) is an emerging paradigm of intelligent computing methodologies and systems, based on cognitive informatics, that implements computational intelligence by autonomous inferences and perceptions mimicking the mechanisms of the brain (Wang, 2002a, 2009b, 2009g). CC emerged and has been developed through transdisciplinary research in cognitive informatics, abstract intelligence, and denotational mathematics since the inauguration of the 1st IEEE International Conference on Cognitive Informatics (ICCI 2002, see Figure 1) (Wang et al., 2002, 2008).

Figure 1. IEEE ICCI’08 keynote speakers and co-chairs at Stanford University (from right to left: Jean-Claude Latombe, Lotfi A. Zadeh, Yingxu Wang, Witold Kinsner, and Du Zhang)

Definition 3: Abstract Intelligence (αI) is the general mathematical form of intelligence as a natural mechanism that transfers information into behaviors and knowledge (Wang, 2009a). Typical paradigms of αI are natural intelligence, artificial intelligence, machinable intelligence, and computational intelligence, as well as their hybrid forms.

Definition 4: Denotational Mathematics (DM) is a category of expressive mathematical structures that deals with high-level mathematical entities beyond numbers and sets, such as abstract objects, complex relations, perceptual information, abstract concepts, knowledge, intelligent behaviors, behavioral processes, and systems (Wang, 2002b, 2007a, 2008a, 2008c, 2008d, 2008e, 2009d, 2009f; Wang, Zadeh & Yao, 2009). In recognizing mathematics as the meta-methodology of all sciences and engineering disciplines, a set of DMs has been created and applied in CI, αI, CC, AI, soft computing, computational intelligence, and fuzzy inferences.
The IEEE ICCI series has been established since 2002 (Wang, 2002a, 2003b; Wang et al., 2002). Since its inception, ICCI has grown steadily in size, scope, and depth, attracting researchers worldwide from academia, government agencies, and industry. The conference series provides a main forum for the exchange and cross-fertilization of ideas in the new research field of CI, toward revealing the cognitive mechanisms and processes of human information processing and the approaches to mimicking them in cognitive computing. The theoretical framework of CI (Wang, 2007b) encompasses a) fundamental theories of natural intelligence; b) abstract intelligence; c) denotational mathematics; and d) cognitive computing, as follows.

• CI Theories: Fundamental theories developed in CI cover the Information-Matter-Energy-Intelligence (IME-I) model (Wang, 2007a), the Layered Reference Model of the Brain (LRMB) (Wang et al., 2006), the Object-Attribute-Relation (OAR) model of information representation in the brain (Wang, 2007c), the cognitive informatics model of the brain (Wang & Wang, 2006), Natural Intelligence (NI) (Wang, 2007b), and neuroinformatics (Wang, 2007b). Recent studies on LRMB in cognitive informatics reveal an entire set of cognitive functions of the brain and their cognitive process models, which explain the functional mechanisms and cognitive processes of natural intelligence with 43 cognitive processes at seven layers, known as the sensation, memory, perception, action, metacognitive, meta-inference, and higher cognitive layers (Wang et al., 2006).

• Abstract Intelligence (αI): The studies on αI form a human enquiry of both natural and artificial intelligence at the reductive levels of the neural, cognitive, functional, and logical, from the bottom up (Wang, 2009a). Typical paradigms of αI include natural, artificial, machinable, and computational intelligence.
The studies in CI and αI lay a theoretical foundation toward revealing the basic mechanisms of different forms of intelligence. As a result, cognitive computers may be developed, which are characterized as knowledge processors, beyond the data processors of conventional computing.

• Denotational Mathematics (DM): DM is a category of expressive mathematical structures that deals with high-level mathematical entities beyond numbers and sets, such as abstract objects, complex relations, perceptual information, abstract concepts, knowledge, intelligent behaviors, behavioral processes, and systems (Wang, 2008a). It is recognized that the maturity of a scientific discipline is characterized by the maturity of its mathematical (meta-methodological) means. Typical paradigms of DM include concept algebra (Wang, 2008c), system algebra (Wang, 2008d; Wang, Zadeh & Yao, 2009), real-time process algebra (Wang, 2002b, 2007a, 2008e), granular algebra (Wang, 2009h), visual semantic algebra (Wang, 2009f), fuzzy quantification/qualification, fuzzy inferences, and fuzzy causality analyses. DM provides a coherent set of contemporary mathematical means and explicit expressive power for CI, αI, CC, AI, and computational intelligence.

• Cognitive Computing (CC): As presented in Definition 2, the latest advances in CI, αI, and DM have led to a systematic solution for future-generation intelligent computers, known as cognitive computers that think and learn (Wang, 2006, 2009b), which will enable the simulation of machinable thought such as computational inferences, reasoning, and causality analyses.
A wide range of applications of CI, αI, CC, and DM is expected toward the implementation of highly intelligent machinable thought such as formal inference, symbolic reasoning, problem solving, decision making, cognitive knowledge representation, semantic searching, and autonomous learning.

• Applications of CI, αI, CC, and DM: The key applications of the above cutting-edge fields in CI can be divided into two categories. The first category uses informatics and computing techniques to investigate problems of intelligence science, cognitive science, and knowledge science, such as abstract intelligence, memory, learning, and reasoning. The second category includes the areas that use cognitive informatics theories to investigate problems in informatics, computing, software engineering, knowledge engineering, and computational intelligence. CI focuses on the nature of information processing in the brain, such as information acquisition, representation, memory, retrieval, creation, and communication. Through this interdisciplinary approach, and with the support of modern information and neuroscience technologies, the mechanisms of the brain and the mind may be systematically explored within the framework of CI.

COGNITIVE INFORMATICS AND GLOBAL CONSCIOUSNESS

In this section, we emphasize the need for scalability of the state of cognitive informatics and for modeling of human intent, emotions, and perceived reality. Much of the fascination with cognition stems from the learnt experience surrounding attempts to automate emotional perception in the context of current events. We emphasize the need to scale up cognitive models to a global consciousness that is now aided by multi-sensory networks interacting with a perceived global awareness, facilitated by integrated computer and sensor networks. Questions of existence have transcended the borders of the sciences and of non-empirical models of the perceived world surrounding our senses.
Are these questions different now than they were five hundred or a thousand years ago? Are our minds evolving? Or do we perceive a relativistic time-dilation effect in the information age? We are beginning to understand self-awareness in the context of time and space; that is, the bringing of consciousness to oneself as the subject of consciousness in the current process of cognition is accelerating. The enabling technology is the diversification of computer networks and the Internet. Many have captured the essence of information exchange in the context of the recent past, but few have understood the direction of the evolutionary process ahead of the process itself. On May 26, 1995, Bill Gates sent one of his well-known memos, preceding the information technology revolution: the Internet would “set the course of our industry for a long time to come.” In his “Internet Tidal Wave” memo, Gates declares: “In this memo I want to make clear that our focus on the Internet is critical to every part of our business.” This was only a year after Netscape launched its browser in 1994. Was this a vision or an accident? What made his “vision” so clear? Was it a random inspiration, or was it a time-dilation effect in the information exchange domain? To answer this question, we must probe deeper into the differences between our synaptic neural networks and the von Neumann machine. It remains to be seen whether the answer can be found in the patterns of the Law of Powers of 10 (Pirolli, 2009).

• Timely Information: In relativistic terms, time dilates as the speed of a particle approaches c, the speed of light.
As we all become aware of global events, such as the financial tidal wave, our receptors are continuously updated with a stream of text, images, video, and sound. Information meta-structures are taking new forms. Their delivery, and often their representation, have evolved as the speed of transmission and greatly enlarged memory capacity seem to make our perceptual reference clock slow down. How is our world different from 1995? One answer is Google. Large quantities of information can now be mined in milliseconds. For example, a simple pattern “*a*” on Google currently gives: “Results 1-10 of about 18,860,000,000 for *a*. (0.04 seconds),” that is, 18 billion pages in 40 milliseconds.

• Global Consciousness: Most interestingly, information transfer through our receptors has taken almost paradoxical forms, as, for example, in the Global Consciousness Project at Princeton (Nelson, 2000). Started in the 1970s by Robert Jahn (2000) with a simple random number generating device, the REG (Random Event Generator), the Global Consciousness Project currently serves as a seismograph for tremors in the global consciousness medium. Designed as a large-scale sensor network, it is intended to capture alterations in the randomness of background noise, potentially showing polarization of group-mind activity when events of considerable significance take place. Since 1998, the Global Consciousness Project (GCP), also known as the Princeton EGG (ElectroGaiaGram = electroencephalogram + Gaia), has archived random data in parallel sequences of synchronized 200-bit trials, collected every second, continuously, from a global network of physical random number generators located at more than 65 host sites around the world. The objective is to capture large-scale interactions between physical systems and human emotions expressed through brain activity in the context of perceived reality events. It claims to have accurately indicated the global mourning during Princess Diana’s funeral in Sep.
1997, and to have spiked off the charts before and during 9/11. This experiment is certainly one of the most representative scientific awakenings of cognition in the search for global consciousness. It is one of the best representations of a large-scale sensor network that could provide a scientific link to mind-machine interfaces. It is a platform that shows in real time the dependency between information transfer and time, the basic ingredients of cognitive informatics (Wang, 2002a, 2007b; Wang et al., 2009). Are we beginning to acknowledge the synaptic formation of a global consciousness? The answer may be realized as the integral relationship between time and awareness in the neural fabric of a network of minds. ICCI 2009 (Baciu, Wang, et al., 2009) is our modest attempt to address the many facets of cognition in the context of the perception of reality and the extrapolation of brain power to “machine intelligence.” We certainly hope that we are starting to pave the road to the next level of mind-brain interactions and the machines that could support it.

COGNITIVE INFORMATICS AND COGNITIVE DYNAMICAL SYSTEMS

Many developments of the last century centered on adaptation and adaptive systems. The focus in this century appears to be shifting toward cognition and cognitive dynamical systems with emergence (Kinsner, 2007). Although cognitive dynamical systems are always adaptive to various conditions in the environment, adaptive systems of the past have not been cognitive. The evolving formulation of cognitive informatics (CI) (Wang, 2002a; Wang et al., 2009) has been an important step in bringing together the diverse areas of science, engineering, and technology required to develop such cognitive systems.
Current examples of cognitive systems include autonomic computing, cognitive radio, cognitive radar, cognitive robots, cognitive networks, cognitive computers, cognitive cars, cognitive factories, brain-machine interfaces for physically impaired persons, and cognitive binaural hearing instruments. The phenomenal interest in this area may be due to the recognition that perfect solutions to large-scale scientific and engineering problems may not be feasible, and that we should instead seek the best solution for the task at hand. “Best” here means the suboptimal but most reliable (robust) solution, given not only limited resources (financial and environmental) but also incomplete knowledge of the problem and partial observability of the environment. Many new theoretical, computational, and technological developments have been described at this conference and in related journals. The challenges can be grouped into several categories: (a) theoretical, (b) technological, and (c) sociological. The first group, the theoretical issues, includes modelling, reformulation of information and entropy, multiscale measures and metrics, and management of uncertainty. Modelling of cognitive systems requires radically new approaches. Reductionism has dominated our scientific worldview for the last 350 years, since the times of Descartes, Galileo, Newton, and Laplace. In that approach, all reality can be understood in terms of particles (or strings) in motion. The Nobel laureate physicist Steven Weinberg said, “All explanatory arrows point downward, from societies to people, to organs, to cells, to biochemistry, to chemistry, and ultimately to physics,” and “The more we know of the universe, the more meaningless it appears.” However, in this unfolding emergent universe with agency, meaning, values, and purpose, we cannot prestate or predict all that will happen.
Since cognitive systems rely on agents perceiving the world, learning from it, remembering and developing the experience of self-awareness, feelings, and intentions, deciding how to control not only tasks but also communication with other agents, and creating new ideas, CI cannot rely solely on the reductionist approach to describing nature. In fact, CI tries to expand its modelling in order to deal with an emergent universe in which no laws of physics are violated, and yet ceaseless, unforeseeable creativity arises and surrounds us all the time. This new approach requires many new ideas to be developed, including reformulation of the concepts of cognitive information, entropy, and associated measures, as well as management of uncertainty and new forms of cognitive computing. As we have seen over the last decade, cognitive informatics is multidisciplinary and requires cooperation between many subjects, including the sciences (e.g., cognitive science, evolutionary computing, granular computing, computer science, game theory, crisp and fuzzy sets, mathematics, physics, chemistry, biology, psychology, the humanities, and the social sciences) and engineering and technology (computer, electrical, and mechanical engineering, information theory, control theory, intelligent signal processing, neural networks, learning machines, sensor networks, wireless communications, and computer networks). Special issues of the Proceedings of the IEEE and IJCINI are dedicated to cognitive systems, covering their practical perspectives (April 2009), fundamental issues (May 2009), and cognitive computing (October 2009).
COGNITIVE COMPUTING AND COMPUTER VISION

Endowing computers with human visual capability is one of the main goals of artificial intelligence (AI), although there is still a long way to go. Taking object recognition as an example: in the 1980s, the main approach to the problem was 3D reconstruction, i.e., the reconstruction of 3D objects from 2D images. In the 1990s, since the 3D reconstruction method was confronted with extreme difficulty, most researchers abandoned the attempt and turned to the 2D-based approach, i.e., object recognition from 2D images directly. However, the new road is still uneven. When a huge amount of 2D image data is obtained by digital cameras for object recognition (or classification), it must be transformed into an object-invariant representation. To solve this problem, we need two key techniques: a robust detector and an object-invariant descriptor (Zhang & Zhang, 2002). A number of great efforts have been made on these techniques, but so far few efficient solutions have been found. A new direction that has emerged to solve the problems of computer vision is that computer science may learn from cognitive informatics, neuroscience, and brain science, studying what computer vision can learn from human visual principles and how it will be affected by this new interdisciplinary research.

COGNITIVE COMPUTING AND MACHINE CONSCIOUSNESS

The brain is viewed as a computer in which sensory processing, control of behavior, and other cognitive functions emerge from ‘neurocomputation’ in parallel networks of perceptron-like neurons. In each neuron, dendrites receive and integrate synaptic inputs up to a threshold for axonal firing as output: ‘integrate-and-fire’. Neurocomputation in axonal-dendritic synaptic networks successfully accounts for non-conscious (auto-pilot) cognitive brain functions.
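The integrate-and-fire behavior just described can be sketched minimally as follows; the leak factor, threshold, and input values are illustrative assumptions rather than parameters from the text:

```python
def integrate_and_fire(inputs, threshold=1.0, leak=0.9):
    """Minimal leaky integrate-and-fire neuron: dendritic inputs are
    accumulated into a membrane potential; when the potential crosses
    the threshold the axon 'fires' (output 1) and the potential resets."""
    potential = 0.0
    spikes = []
    for x in inputs:
        potential = potential * leak + x   # integrate synaptic input, with leak
        if potential >= threshold:
            spikes.append(1)               # axonal firing
            potential = 0.0                # reset after the spike
        else:
            spikes.append(0)               # sub-threshold: no output
    return spikes

# Three weak inputs accumulate to one spike; a strong input fires at once.
print(integrate_and_fire([0.4, 0.4, 0.4, 0.0, 1.2]))  # [0, 0, 1, 0, 1]
```

In this toy form the neuron is purely feed-forward; the gamma-synchronized dendritic webs discussed next are precisely what this axonal-dendritic picture leaves out.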
When cognitive functions are accompanied by consciousness, neurocomputation is accompanied by 30 to 90 Hz gamma-synchrony EEG. Gamma synchrony derives primarily from neuronal groups linked by dendritic-dendritic gap junctions, forming transient syncytia (‘dendritic webs’) in input/integration layers oriented sideways to the axonal-dendritic neurocomputational flow. As gap junctions open and close, a gamma-synchronized dendritic web can rapidly change topology, evolve, and move through the brain (as a benevolent computer worm might move through computer circuits) as a spatiotemporal envelope performing collective integration and volitional choices correlating with consciousness. The ‘conscious pilot’ is a metaphorical description of a mobile, gamma-synchronized dendritic web as a vehicle for a conscious agent/pilot that experiences, and assumes control of, otherwise non-conscious auto-pilot neurocomputation. Applications of the conscious pilot in computing have been identified, such as a self-organizing mobile agent moving through the input/integration layers of computational networks.

COGNITIVE INFORMATICS AND WEB INTELLIGENCE

Artificial Intelligence (AI) has been studied mainly within the realm of computer-based technologies. Various computational models and knowledge-based systems have been developed for automated reasoning, learning, and problem solving. However, several grand challenges still exist. AI research has not produced a major breakthrough recently, due to a lack of understanding of human brains and natural intelligence. Ignoring what goes on in the human brain and focusing instead on behavior has been a large impediment to understanding complex human adaptive, distributed reasoning and problem solving. In addition, most of the AI models
and systems will not work well when dealing with large-scale, dynamically changing, open, and distributed information sources at web scale. To develop new cognitively inspired web reasoning and problem-solving systems, we need to better understand how humans perform complex adaptive, distributed problem solving and reasoning. Understanding the principles and mechanisms of information organization, retrieval, and selection in human memory aims at finding more cognition-inspired methods for information memory systems, problem solving, and reasoning at web scale. Based on many investigations of information retrieval and selection in the human memory system, we can view the human brain as a huge distributed knowledge base with multiple information granule networks. In the light of this brain-inspired methodology, we need to investigate the following issues specifically:

• Why can humans give a good answer within a reasonable time, by exploring variable precision, when receiving a question (i.e., a reasoning problem)?

• How do humans select a suitable level of information granules and retrieve from single or multiple information sources, based on a trade-off between user needs and certain constraints?

As a result, the relationships between biologically plausible granular reasoning and web reasoning need to be defined and/or elaborated. Granular reasoning describes a way of thinking derived from the human ability to perceive the real world under various levels of granularity. It provides a solution for web-scale reasoning and would have a significant impact on problem solving and reasoning at web scale. From the viewpoint of granular reasoning, data, information, and knowledge are arranged in multiple levels according to their granularity.
A higher level contains more abstract or general knowledge, while a lower level contains more detailed or specific knowledge. Reasoning can be performed at various levels. Results from a higher level may be imprecise but can be obtained faster; in contrast, one can move to a lower level to obtain a more precise conclusion if more time is allowed. Therefore, granular reasoning offers a multiple-resolution reasoning scheme: one may choose a proper level of granularity to draw a desirable conclusion under certain constraints. In fact, such a reasoning scheme is commonly used by humans for practical, real-time decision making. The study of granular reasoning for humans and the web can therefore be carried out in a unified way from the viewpoint of cognitive informatics and brain informatics, with web granular reasoning considered an application of human-inspired granular reasoning. As for human granular reasoning, building on previous studies of the basic-level advantage and its reversal effect, fMRI/ERP can be adopted to investigate how neural systems cooperate and coordinate when the starting point is located at the basic level, and how the brain modulates and adapts when the starting point switches to a more general level. The key question is: can we find a new cognitive model for developing human-level, web-based granular reasoning and problem solving? To answer this question, we investigate the cognitive mechanisms and neural basis of human problem solving and reasoning, in order to develop new cognitively inspired web intelligence models. Based on these results, we will implement a Problem Solver Markup Language (PSML) for representing, organizing, retrieving, and selecting web information sources at multiple levels of granularity, and develop PSML-based web inference engines for personalized wisdom-web problem solving and services.
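The multiple-resolution scheme above can be sketched as a coarse-to-fine lookup; the taxonomy paths, facts, and cost model (one time unit per level) are hypothetical illustrations, not part of any actual PSML design:

```python
# Facts are indexed by taxonomy paths from abstract to specific; a reasoner
# answers from the deepest level its time budget allows. All names and
# facts below are invented for illustration.
FACTS = {
    "bird": "most birds fly",                    # abstract: fast, imprecise
    "bird/penguin": "penguins do not fly",       # specific: slower, precise
    "bird/penguin/gentoo": "gentoos swim fast",  # most specific
}

def granular_answer(path, time_budget):
    """Truncate the query path to the deepest level affordable within the
    time budget (one time unit per level) and answer at that level."""
    parts = path.split("/")
    depth = min(time_budget + 1, len(parts))  # levels reachable in budget
    return FACTS.get("/".join(parts[:depth]))

print(granular_answer("bird/penguin", time_budget=0))  # most birds fly
print(granular_answer("bird/penguin", time_budget=5))  # penguins do not fly
```

With a tight budget the answer is drawn from the abstract level and is wrong for penguins; given more time, the reasoner descends and corrects itself, which is exactly the precision-for-speed trade-off described above.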
COGNITIVE COMPUTING AND GRAPH INFORMATION PROCESSING

Many real-world problems can be represented as graphs and are better solved with such a representation: for example, finding the best topological configuration for architectural layout design, the best molecular structure for drug discovery, or the best network topology for overlay network optimization. These problems can be formulated as combinatorial optimization problems and solved, like many other such problems, with an evolutionary algorithm (EA). Even though there are EAs developed to evolve trees or artificial neural network architectures, many of them are not designed to deal with graphs in general. To deal with the many problems that are represented by graphs, we propose a novel EA. The EA represents graphs as adjacency matrices and uses reproduction operators, resembling uniform crossover and mutation in linear-string GAs, to generate better and better graphs until an optimal or near-optimal solution graph is identified. Like human problem solving, this approach can be used to tackle very complex problems. Such a problem can first be broken down into easily manageable sub-problems so that solutions can be found relatively easily. Based on these solutions to the sub-problems, much more complex problems can be solved with the EA, using the sub-problem solutions as building blocks. Very complex real-world problems are usually characterized by the size of the graphs generated; when a very large graph is generated, there is a need for it to be understood.
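A minimal sketch of such an EA over adjacency matrices, assuming undirected graphs; the toy fitness function, population size, and rates are placeholders rather than the authors' actual design:

```python
import random

def random_graph(n, p=0.5):
    """Random undirected graph as a symmetric 0/1 adjacency matrix."""
    m = [[0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1, n):
            m[i][j] = m[j][i] = int(random.random() < p)
    return m

def crossover(a, b):
    """Uniform crossover: each edge slot comes from either parent."""
    n = len(a)
    child = [[0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1, n):
            child[i][j] = child[j][i] = random.choice((a[i][j], b[i][j]))
    return child

def mutate(m, rate=0.05):
    """Flip each edge slot with a small probability, as in string GAs."""
    n = len(m)
    for i in range(n):
        for j in range(i + 1, n):
            if random.random() < rate:
                m[i][j] = m[j][i] = 1 - m[i][j]
    return m

def evolve(fitness, n, pop_size=30, generations=50):
    """Keep the fitter half each generation, refill by crossover plus
    mutation, and return the best graph found."""
    pop = [random_graph(n) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[: pop_size // 2]
        children = [mutate(crossover(random.choice(survivors),
                                     random.choice(survivors)))
                    for _ in range(pop_size - len(survivors))]
        pop = survivors + children
    return max(pop, key=fitness)

# Toy objective: evolve a graph with as many edges as possible.
best = evolve(lambda g: sum(map(sum, g)), n=6)
```

Replacing the toy fitness with a domain objective (layout cost, binding energy, overlay latency) turns the same loop into the kind of graph optimizer the paragraph describes.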
Many problems of practical importance, such as biological networks, social networks, the web, marketing, and land-use analysis and planning, involve the handling of large graphs. Such graph data capture not only the attributes of different objects but also the multiple relationships among them. To understand complex problem solutions, there is often a need to discover the patterns in the graphs that represent them. To do so, graph mining techniques can be used. Existing graph mining techniques can discover frequent sub-graphs in large graphs, but frequent sub-graphs may not always represent interesting patterns. An approach is needed to discover interesting patterns that can uniquely characterize a graph and allow it to be distinguished from others, so that the uniqueness of a solution representable as a graph becomes more easily noticeable.

COGNITIVE INFORMATICS AND COMPUTATIONAL LINGUISTICS

Language is one of the most complex of all human cognitive activities. Linguistic output, in both spoken and written form, also offers the most tangible example of cognitive activity, in terms of both its quantity and its accessibility: “The quality of language that makes it unique does not seem to be so much its role in communicating directives for action as its role in symbolizing, in evoking cognitive images. We mold our ‘reality’ with our words and our sentences in the same way as we mold it with our vision and our hearing” (Jacob, 1982); “The ‘real world’ is to a large extent unconsciously built up on the language habits of the group” (Sapir, 1929). Reinterpreting Jacob (1982) and Sapir (1929), we could state that cognition is the reality molded and modeled by the convention of language. In contrast, the field of informatics aims to offer an alternative to the imprecise and redundant nature of language, but still needs to deal with natural language’s representational conventions, both as the source of information and as the user’s preferred representational interface.
The above facts suggest a potential synergy between computational linguistics and cognitive informatics. Should we model ‘Reality’ or the ‘reality molded by language’? Does the modeling of ‘reality molded by language’ facilitate the modeling of ‘Reality’? How can cognitive informatics model and express competing preferences, ambiguity, and other nuances in a way similar to natural language? Can the cognitive structures conventionalized by languages be effectively extracted based on linguistic facts? Can cognitive informatics and computational linguistics join forces to build an explanatory model of knowledge, as described by Aristotle’s four causes? I believe studies of these provocative issues can lead to a productive synergy between cognitive informatics and computational linguistics.

COGNITIVE INFORMATICS AND PATTERN THEORY

Cognitive informatics, at its foundation, posits information as a fundamental aspect of the universe, parallel in importance to matter and energy (Wang, 2003a). Information is conceptualized in this context as “any property or attribute of the natural world that can be distinctly elicited, generally abstracted, quantitatively represented, and mentally processed by the brain.” Informatics is then conceived as the study of the structure and dynamics of information and its interrelation with mass-energy, and is understood to incorporate applied aspects as well as fundamental theoretical aspects such as concept algebra, information geometry, etc. An alternative, but conceptually related, foundation for the study of cognition is “pattern theory,” as articulated in The Hidden Pattern (Goertzel, 2006b) and used as the foundation for the AI theories and designs of Goertzel (2006a, 2009a).
In the “patternist perspective,” the notions of production and simplicity are taken as foundational, and a pattern in X is defined as some f which is simpler than X but produces X. Pattern theory connects to information theory in several ways, one being that it contains algorithmic information theory (Chaitin, 1987) as a special case. The various aspects of intelligence, including memory, learning, perception, action, and creativity, are then articulated in pattern-theoretic terms. In pattern theory, the relation between mass-energy and pattern is seen as one of non-foundational inter-containment, meaning that validity is assigned to both of the following perspectives: a) patterns in the universe arise via combinations and interactions of material and energetic entities; b) matter and energy are themselves examples of patterns arising in “universal mind” (cf. Peirce’s aphorism that “matter is mind hide-bound with habit”). In spite of their philosophical differences, cognitive informatics and pattern theory have spawned somewhat related approaches to intelligent systems design; for instance, both focus on the creation of autonomic intelligent systems (Wang, 2004, 2009a). However, pattern theory leads to a greater focus on the emergent patterns that may arise when different cognitive processes interact in the pursuit of common goals, and on related phenomena such as cognitive synergy (Goertzel, 2009b).

COGNITIVE INFORMATICS AND KNOWLEDGE REPRESENTATION

Cognitive informatics (Wang, 2002a), initiated by Yingxu Wang and his colleagues in 2002, was born from the marriage of the cognitive and information sciences. It investigates the internal information processing mechanisms and processes of natural intelligence (i.e., human brains and minds) and artificial intelligence (i.e., machines). Hawkins and Blakeslee (2004) pointed out that human brains and machines rely on fundamentally different mechanisms.
Humans can focus on different levels during the process of problem solving. This leads to the theory of granular computing (GrC). GrC is a way of thinking that relies on the human ability to perceive the real world under various levels of granularity, in order to abstract and consider only those things that serve a specific interest, and to switch among different granularities. By focusing on different levels of granularity, one can obtain different levels of knowledge, as well as a deeper understanding of the inherent knowledge structure. Rough set theory, proposed by Pawlak in 1982, is one of the most important models of GrC (Pawlak, 1987). Most existing rough set models still process data in a flat data table, without considering hierarchical attribute values. We extended traditional rough set theory and proposed a hierarchical rough set model, which can be regarded as a model of cognitive informatics. In the hierarchical rough set model, hierarchical attribute values are considered: for each attribute, a concept hierarchy is constructed. That is, we extend a single attribute to a concept hierarchy tree by introducing prior knowledge, so that each attribute can be processed at multiple levels. We can choose any level for each attribute according to the requirements of problem solving, so as to discover knowledge at different levels. From the viewpoint of relations, the hierarchical rough set model also extends the traditional equivalence relation to a nested series of equivalence relations.
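As a minimal sketch of this idea (the data and the `location` concept hierarchy below are hypothetical, not from the cited model), Pawlak's lower and upper approximations can be computed at two granularity levels simply by switching the equivalence relation from the finer city level to the coarser country level:

```python
def granules(universe, key):
    """Partition the universe into equivalence classes (granules)
    induced by the description function key."""
    parts = {}
    for x in universe:
        parts.setdefault(key(x), set()).add(x)
    return list(parts.values())

def approximations(universe, key, target):
    """Pawlak lower/upper approximations of target under the
    equivalence relation induced by key."""
    lower, upper = set(), set()
    for g in granules(universe, key):
        if g <= target:
            lower |= g          # granule certainly inside the target
        if g & target:
            upper |= g          # granule possibly inside the target
    return lower, upper

# Hypothetical concept hierarchy: the city level is finer than the country level.
hierarchy = {"Calgary": "Canada", "Toronto": "Canada", "Beijing": "China"}
people = {("a", "Calgary"), ("b", "Toronto"), ("c", "Beijing"), ("d", "Calgary")}
target = {("a", "Calgary"), ("b", "Toronto"), ("c", "Beijing")}

# The fine level discerns by city; the coarse level lifts each city to its country.
fine_lower, fine_upper = approximations(people, lambda p: p[1], target)
coarse_lower, coarse_upper = approximations(people, lambda p: hierarchy[p[1]], target)
print(fine_lower)    # b and c are certain members; the Calgary granule {a, d} is split
print(coarse_lower)  # only c is certain: the Canada granule mixes target and non-target
```

Coarsening the relation shrinks the lower approximation, which is exactly the trade-off the nested series of equivalence relations makes explicit.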
It is recognized that human thought and knowledge are normally organized as hierarchical structures, in which concepts are ordered by their different levels of specificity or granularity (Yao, 2005; Wang, 2008c, 2009e). With granular structures, the triangle summarizes three mutually supporting perspectives: philosophy (structured thinking), methodology (structured problem solving), and computation (structured information processing). The theory of knowledge spaces, proposed by Doignon and Falmagne in 1985, can be regarded as another model of cognitive informatics and GrC. It represents a new paradigm in mathematical psychology for knowledge assessment. It starts out with rather simple psychological assumptions about assessing students’ knowledge based on their ability to answer questions. Knowledge spaces may be viewed as a theory of information presentation and information use. The main objective of knowledge spaces is to solve the problem of knowledge assessment effectively and economically. In knowledge spaces, a person’s knowledge states are represented and assessed systematically by using a finite set of questions. A collection of subsets of questions is called a knowledge structure, i.e., a granular structure, in which each subset is called a knowledge state or a cognitive state, i.e., a granule. The family of knowledge states may be determined by the dependency among questions or by the mastery of different sets of questions by a group of students. Rough sets and knowledge spaces are two related theories. We introduced two of the central topics of rough sets, approximations and reduction, into knowledge spaces and revealed the strong connection between the two theories based on a common framework of GrC.

COGNITIVE INFORMATICS AND SYMBIOTIC COMPUTING

Motivation for Symbiotic Computing

The growth of the ICT industry enables people to obtain various kinds of information from the Internet by using high-performance computers and advanced network technologies.
The recent rapid growth of ubiquitous technologies provides more convenient services for users, and is expected to lead the traditional ICT society to a ubiquitous society where people can access any information anywhere and at any time. On the other hand, problems of the Internet society, such as the digital divide, security, and network-based crimes, remain serious obstacles to building a safe and secure information society. These problems arise from social and human difficulties rather than from computer and network technology. For example, people do not feel comfortable knowing that they cannot access important and useful information resources, which skilled users exploit with ease, because they lack IT skills. They may also hesitate to access websites to do something useful in an ICT society if they do not have sufficient knowledge and skills. An unskilled user wants support from computer systems that know him/her well and possess the social knowledge needed for him/her to act safely and securely in the future ICT society. Therefore, a new discipline is expected to tackle these difficult problems by bringing sociality and humanity into computing models.

Concept of Symbiotic Computing

Licklider’s view of man-computer symbiosis was extended to a view of neo-symbiosis in terms of human information interaction (Griffith & Greitzer, 2007). In the context of this symbiosis, the quality of life of people in the Real Space (RS) becomes higher if every person couples tightly with the Digital Space (DS) and has a partnership with DS. However, problems of the Internet age, such as the digital divide, prevent many people from coupling tightly with the Internet.
Following this view of symbiosis, we define Community-Agents Symbiosis as a relation in which: 1) people in a community in the Real Space are tightly coupled with a special agent in the Digital Space, and 2) a person and a personal agent maintain a partnership that lets him/her act safely and conveniently in RS and DS. Symbiotic computing is therefore a computing model that builds Community-Agents Symbiosis by bringing sociality and humanity into computing models. Symbiotic computing consists of four function models: Perceptual Functions, Social Functions, Cognitive Functions, and Decision Functions, as shown in Figure 2.

Figure 2. The conceptual framework of symbiotic computing

Relationship between Symbiotic Computing and Cognitive Informatics

One of the goals of symbiotic computing is to build a Community-Agent Symbiosis between a person and a Partner Agent, in which the two recognize one another so that the agent can help its partner’s activities in RS and DS. Trust between partners emerges when they mutually perceive each other’s existence and intentions. The model of the Partner Agent is similar to the Layered Reference Model of the Brain (LRMB) (Wang, 2007b, 2009c; Wang et al., 2006). The Partner Agent is a multi-agent system that will have the characteristics of Cognitive Machines (Kinsner, 2007; Wang, 2009c).

COGNITIVE INFORMATICS AND GRANULAR MODELING

Cognitive informatics and cognitive science have so far involved the main contents of perception, attention, memory, language, thought, appearance, and consciousness. Researchers with different professional backgrounds use different research methods. Granular Computing (GrC) is a new methodology for solving complex problems in the field of artificial intelligence.
Its essence is to analyze a problem from different aspects and granularities, and then to obtain a comprehensive or integrated solution (or an approximate solution). By treating problem solving as interactive communication across different levels, it makes problem solving highly efficient (Zadeh, 1997; Yao, 2008). The actual cognition environment is a complicated information world. Different cognition theories and methods are studied from different viewpoints, and they are linked to different subjects or areas. There are several typical cognition models, such as the psychological cognition model (Robert & Jeffery, 2005), the cognitive informatics models of the brain proposed by Wang (2002a, 2003a), the concept cognition model proposed by Zhang and Xu (2007), and the ontology cognition model by William and Austin (1999). They share a common characteristic: they indicate that humans cognize things from different aspects and levels. By first building a hierarchical structure for complex information (granulation), and then mining the characteristics of things at different granularity levels within that structure, people can cognize things in essence and gradually build an information tree, a cognition model that proceeds from outside to inside, from shallow to deep, and from simple to complex. GrC provides a new way to simulate human thinking in solving complicated problems. When people face a chaotic and unlabeled information world, they tend to abstract a large amount of complex information into a few simple concepts in order to analyze and understand it clearly. Yao, Wang, and Zadeh present unifying GrC frameworks based on a high-level examination of GrC (Yao, 2008; Wang, 2009h; Wang, Zadeh, & Yao, 2009). From the viewpoint of philosophy, GrC tries to abstract and formalize human cognition and thereby derive a structured thinking pattern. There are three basic elements in the GrC-based cognition model, namely granules, granulation, and multi-granularity across multiple layers.
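These three elements can be shown in a minimal sketch (the temperature readings and layer labels below are hypothetical illustrations, not from the cited models): granules are blocks of objects, granulation merges finer granules into coarser ones, and the resulting chain of partitions is a multi-granularity structure:

```python
def coarsen(partition, merge_map):
    """Granulation step: merge finer granules into coarser ones,
    where merge_map sends each fine granule label to a coarse label."""
    coarse = {}
    for label, block in partition.items():
        coarse.setdefault(merge_map[label], set()).update(block)
    return coarse

# Layer 1: the finest granules over hypothetical temperature readings.
layer1 = {"cold": {2, 5}, "cool": {12, 15}, "warm": {22, 25}, "hot": {33, 36}}
# Layer 2: granulation merges the four granules into two coarser ones.
layer2 = coarsen(layer1, {"cold": "low", "cool": "low", "warm": "high", "hot": "high"})
# Layer 3: the coarsest layer, a single granule holding every reading.
layer3 = coarsen(layer2, {"low": "all", "high": "all"})

print(layer2)  # 'low' holds the readings 2, 5, 12, 15; 'high' holds 22, 25, 33, 36
print(layer3)  # one 'all' granule containing every reading
```

Problem solving can then pick whichever layer matches the interest at hand, exactly the switching among granularities described above.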
Over 90% of the information captured by human beings is visual information, and computer image information processing is one of the key technologies in computer science. However, most traditional image processing methods describe the characteristics of images by pixels, color, and resolution, rather than simulating the human brain. Thus, it is difficult for traditional image processing methods to search for images in huge data sets efficiently. The human brain can identify images quickly because it captures the characteristics of things at different granularity levels or from different aspects, rather than remembering every characteristic. This process is similar to that of complex problem solving based on GrC. At present, GrC-based human cognition of images is attracting many researchers’ attention. For example, an image cognition model is proposed by Zhang (2002), a computational cognitive model is proposed by Shi et al. (2008), and a knowledge-based automatic face modeling algorithm built on three-dimensional facial structure is proposed by Wang and Gong (2008, 2009). In the process of image cognition, the human brain can acquire information at different granularity levels automatically, as follows. The first step is the multi-granularity (multi-layer) description of image information, which simulates the human brain’s recognition of an image at multiple granularities (as well as the zoom-in and zoom-out process). The second step is image information processing. In order to cognize and identify image information, a concept tree about the image information needs to be built; it is a stable image information structure (such as a pyramid structure).
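A minimal sketch of such a pyramid structure (assuming a toy grayscale image stored as lists of rows with even dimensions; the function names are illustrative): each coarser layer is obtained by 2x2 average pooling, so the top layer keeps only basic, global information while the bottom layer keeps the pixel detail:

```python
def downsample(image):
    """One pyramid step: 2x2 average pooling halves each dimension.
    image is a list of equal-length rows of grayscale values
    (dimensions assumed even for simplicity)."""
    return [[(image[r][c] + image[r][c + 1] +
              image[r + 1][c] + image[r + 1][c + 1]) / 4
             for c in range(0, len(image[0]), 2)]
            for r in range(0, len(image), 2)]

def pyramid(image):
    """Multi-granularity description: the pixel-level image followed
    by successively coarser layers, down to a single granule."""
    layers = [image]
    while len(layers[-1]) > 1 and len(layers[-1][0]) > 1:
        layers.append(downsample(layers[-1]))
    return layers

img = [[0, 0, 8, 8],
       [0, 0, 8, 8],
       [8, 8, 0, 0],
       [8, 8, 0, 0]]
for layer in pyramid(img):
    print(layer)
# The 4x4 pixel layer, then [[0.0, 8.0], [8.0, 0.0]], then [[4.0]]
```

Coarse-to-fine search over such layers is one simple realization of the top-down, zoom-in/zoom-out processing described above.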
The top-layer image information carries basic, global information, while the bottom layer carries detailed information (such as pixels, texture, and resolution). The human brain usually does not deal with complex information directly; instead, it processes the image information in the hierarchical “pyramid” structure in a top-down manner. In order to give a computer cognition abilities like those of the human brain for dealing with image information efficiently, two key problems need to be solved: multi-granularity image description and multi-granularity image processing. Multi-granularity image description should be consistent with the human multi-granularity hierarchical cognition model, while multi-granularity image processing (such as searching and identification) depends on the image information representation.

COGNITIVE COMPUTING AND GRANULAR COMPUTING
